[RemoveLayoutConversions]: Reduce loop carried values - part 2 #4921


Draft — wants to merge 5 commits into base: main
Conversation

@etiotto commented Aug 19, 2025

TBD

@etiotto etiotto self-assigned this Aug 19, 2025
@etiotto commented Aug 19, 2025

Depends on PR #4915.

@etiotto etiotto requested a review from Copilot August 20, 2025 14:02
@Copilot (Copilot AI) left a comment

Pull Request Overview

This PR reduces loop-carried values in the RemoveLayoutConversions pass by eliminating loop-carried tensor pointer values that can be reconstructed from other values plus layout conversions.

  • Adds a new reduceLoopCarriedValues() method that identifies loop arguments that can be eliminated
  • Replaces operations that use an eliminated value with equivalent operations that use the rematerialized value plus a convert layout operation
  • Hooks the loop-carried value reduction into the backward rematerialization process
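On a hypothetical loop, the transformation looks roughly like this (the shapes, the #blocked/#dpas layout names, and the op spellings below are illustrative, not taken from the PR's test file):

```mlir
// Before: the loop carries the tensor pointer twice, once per layout, so two
// values stay live across iterations.
%res:2 = scf.for %iv = %lb to %ub step %step
    iter_args(%ptr = %init, %ptrCvt = %initCvt)
    -> (tensor<128x64x!tt.ptr<f16>, #blocked>,
        tensor<128x64x!tt.ptr<f16>, #dpas>) {
  ...
  scf.yield %nextPtr, %nextPtrCvt : ...
}

// After: only the #blocked copy is loop-carried; the #dpas copy is
// rematerialized inside the loop body with a layout conversion.
%res = scf.for %iv = %lb to %ub step %step
    iter_args(%ptr = %init) -> (tensor<128x64x!tt.ptr<f16>, #blocked>) {
  %ptrCvt = ttg.convert_layout %ptr
      : tensor<128x64x!tt.ptr<f16>, #blocked>
      -> tensor<128x64x!tt.ptr<f16>, #dpas>
  ...
  scf.yield %nextPtr : tensor<128x64x!tt.ptr<f16>, #blocked>
}
```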

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

File | Description
RemoveLayoutConversions.cpp | Implements the core logic for reducing loop-carried values: adds the new method and integrates it into the backward rematerialization process
backward_combine_dpas_dot_layout.mlir | Adds test cases verifying that the optimization reduces loop-carried values and generates the expected layout conversions



// Replace the loop result corresponding to the argument with an
// equivalent loop result.
OpResult rematRes = loopOp.getTiedLoopResult(cast<BlockArgument>(val));
Copilot AI commented Aug 20, 2025:
The `cast<BlockArgument>(val)` is redundant and potentially unsafe. The variable `val` comes out of `rematMapping` as a `Value`, but the code already has `arg`, the properly cast `BlockArgument` from line 1020. Use `arg` instead of re-casting `val`.

Suggested change
OpResult rematRes = loopOp.getTiedLoopResult(cast<BlockArgument>(val));
OpResult rematRes = loopOp.getTiedLoopResult(arg);


.Default([](auto op) {
llvm::report_fatal_error(llvm::Twine(
"Unsupported operation in backward rematerialization: '" +
op->getName().getStringRef() + "'"));
Copilot AI commented Aug 20, 2025:

The error message in the `Default` case builds the string with `Twine` concatenation, which may not behave as expected: a `Twine` only references its operands, so the concatenated message must be consumed within the same expression. Consider `llvm::formatv` or constructing the message differently: `llvm::formatv("Unsupported operation in backward rematerialization: '{0}'", op->getName().getStringRef())`.

Suggested change
op->getName().getStringRef() + "'"));
llvm::report_fatal_error(
llvm::formatv("Unsupported operation in backward rematerialization: '{0}'", op->getName().getStringRef()));


Labels: none · Projects: none · 1 participant